
    Using Pattern Recognition for Investment Decision Support in Taiwan Stock Market

    Get PDF
    The Taiwan stock market has accumulated large amounts of time-series stock data and successful investment strategies. The stock price, which is impacted by various factors, is the result of buyer and seller investment strategies. Since the stock price reflects numerous factors, its pattern can be read as a record of investors' strategies. In this paper, a pattern recognition approach is adopted to match the current stock price trend with repeatedly appearing past price data. Accordingly, a new method is introduced in this research that quickly extracts features from a stock time-series chart to find the most critical feature points. Matching can then be performed using the information at these feature points. In other words, the goal is to find historically recurring patterns, namely similar trends, to help investors form investment strategies
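    The abstract's pipeline of extracting feature points and matching the current trend against history can be illustrated with a minimal sketch. This is an assumed reading, not the paper's algorithm: feature points are taken as local extrema, and matching uses a z-normalized Euclidean distance over sliding windows.

```python
import numpy as np

def feature_points(prices, order=2):
    """Indices of local maxima/minima, a simple stand-in for the
    paper's 'critical feature points'."""
    idx = []
    for i in range(order, len(prices) - order):
        win = prices[i - order:i + order + 1]
        if prices[i] == win.max() or prices[i] == win.min():
            idx.append(i)
    return np.array(idx)

def match_pattern(history, query):
    """Slide the query over history; return the start index of the
    most similar segment under z-normalized Euclidean distance."""
    q = (query - query.mean()) / (query.std() + 1e-9)
    best, best_d = -1, np.inf
    for s in range(len(history) - len(query) + 1):
        seg = history[s:s + len(query)]
        z = (seg - seg.mean()) / (seg.std() + 1e-9)
        d = np.linalg.norm(z - q)
        if d < best_d:
            best, best_d = s, d
    return best

# synthetic price series: a trending oscillation
hist = np.sin(np.linspace(0, 20, 400)) + 0.01 * np.arange(400)
print(match_pattern(hist, hist[300:330]))  # recovers the segment's own start, 300
```

    A real system would match on the extracted feature points rather than raw windows, which is what makes the search fast.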

    A Behavioral Finance Analysis Using Learning Vector Quantization in the Taiwan Stock Market Index Future

    Get PDF
    There are various types of trading behavior in the stock market, and the buying or selling activity behind many investment strategies is influenced by numerous factors, such as fundamental analysis, macroeconomic analysis, and news analysis. Consequently, these factors are reflected in the market price. The random-walk view of financial engineering is not the focus of this paper; instead, this research emphasizes technical analysis of Taiwan Stock Index Futures. It is the intention of this paper to investigate the information content of the Open, High, Low, and Close prices of the previous trading day, together with the relative high and low points of the period prior to the current trading day, in analyzing Taiwan Stock Index Futures. The predictive power of a Learning Vector Quantization network can clearly be seen from the empirical results
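    For readers unfamiliar with Learning Vector Quantization, a bare-bones LVQ1 can be sketched in a few lines. This is a generic illustration on toy data, not the network or features used in the paper: one prototype per class is pulled toward same-class samples and pushed away from other-class samples.

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.1, epochs=50, seed=0):
    """Minimal LVQ1: one prototype per class, initialized at the
    class mean, updated toward/away from each training sample."""
    rng = np.random.default_rng(seed)
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    labels = np.arange(n_classes)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = d.argmin()                      # best-matching prototype
            step = lr * (X[i] - protos[w])
            protos[w] += step if labels[w] == y[i] else -step
    return protos, labels

def predict(protos, labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]

# toy 4-dimensional features (e.g., open/high/low/close style inputs)
X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 4)) + 2,
               np.random.default_rng(2).normal(0, 1, (50, 4)) - 2])
y = np.array([0] * 50 + [1] * 50)
protos, labels = train_lvq1(X, y, n_classes=2)
print((predict(protos, labels, X) == y).mean())  # near-perfect on separable data
```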

    Better May Not Be Fairer: A Study on Subgroup Discrepancy in Image Classification

    Full text link
    In this paper, we provide 20,000 non-trivial human annotations on popular datasets as a first step toward bridging the gap in studying how natural semantic spurious features affect image classification, as prior works often study datasets mixing low-level features due to limitations in accessing realistic datasets. We investigate how natural background colors play a role as spurious features by annotating the test sets of CIFAR10 and CIFAR100 into subgroups based on the background color of each image. We name our datasets \textbf{CIFAR10-B} and \textbf{CIFAR100-B} and integrate them with CIFAR-Cs. We find that overall human-level accuracy does not guarantee consistent subgroup performance, and the phenomenon persists even for models pre-trained on ImageNet or after data augmentation (DA). To alleviate this issue, we propose \textbf{FlowAug}, a \emph{semantic} DA method that leverages decoupled semantic representations captured by a pre-trained generative flow. Experimental results show that FlowAug achieves more consistent subgroup results than other types of DA methods on CIFAR10/100 and on CIFAR10/100-C, and that it shows better generalization performance. Furthermore, we propose a generic metric, \emph{MacroStd}, for studying model robustness to spurious correlations, in which we take a macro average of the weighted standard deviations across different classes. We show that \textit{MacroStd} is more predictive of better performance; per our metric, FlowAug demonstrates improvements on subgroup discrepancy. Although this metric is proposed to study our curated datasets, it applies to any dataset that has subgroups or subclasses. Lastly, we also show superior out-of-distribution results on CIFAR10.1. Comment: 9 pages, 7 figures, ICC
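    The MacroStd description, a macro average of weighted standard deviations across classes, admits a short numerical sketch. The exact weighting in the paper is not specified in the abstract, so the size-weighted reading below is an assumption for illustration.

```python
import numpy as np

def macro_std(acc, counts):
    """Illustrative MacroStd-style metric: for each class, take the
    standard deviation of its subgroup accuracies weighted by subgroup
    size, then macro-average over classes.

    acc[c]    : per-subgroup accuracies for class c
    counts[c] : per-subgroup sample counts for class c
    """
    per_class = []
    for a, n in zip(acc, counts):
        a, w = np.asarray(a), np.asarray(n) / np.sum(n)
        mu = np.sum(w * a)                       # weighted mean accuracy
        per_class.append(np.sqrt(np.sum(w * (a - mu) ** 2)))
    return float(np.mean(per_class))

# two classes, each split into 3 background-color subgroups
acc = [[0.95, 0.90, 0.60], [0.80, 0.82, 0.78]]
counts = [[500, 300, 200], [400, 400, 200]]
print(macro_std(acc, counts))  # larger value = larger subgroup discrepancy
```

    A model with identical subgroup accuracies scores exactly zero, which matches the intent of the metric: lower MacroStd means less subgroup discrepancy.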

    When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

    Full text link
    Discovering and exploiting the causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study the intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and the perturbed prediction from a DNN-based model. We further provide the first look into the characteristics of the discovered CE of adversarially perturbed images generated by gradient-based methods \footnote{~~https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg}. Experimental results show that CE is a competitive and robust index for understanding DNNs when compared with conventional methods such as class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds promise for detecting adversarial examples, as it possesses distinct characteristics in the presence of adversarial perturbations. Comment: Note that the camera-ready version has changed the title; "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks" is the v3 official paper title in the IEEE Proceedings. Please use it in formal references. Accepted at IEEE ICIP 2019. PyTorch code has been released at https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvIm
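    The core idea of measuring an intervention's effect on a prediction can be sketched very simply. This is a toy proxy under assumed definitions, not the paper's do-calculus framework, which operates on latent-space features: apply a pixel-wise masking intervention and compare the model's outputs before and after.

```python
import numpy as np

def causal_effect(model, x, mask):
    """Toy intervention do(mask): zero out the masked pixels and
    measure the change in the model's output."""
    x_do = x * (1 - mask)          # pixel-wise masking intervention
    return model(x) - model(x_do)

# hypothetical 'model': mean intensity of the image
model = lambda img: img.mean()
x = np.ones((4, 4))
mask = np.zeros((4, 4)); mask[:2, :2] = 1   # mask the top-left quadrant
print(causal_effect(model, x, mask))  # 0.25: the masked region carried 1/4 of the signal
```

    In the paper's setting the perturbation is instead an adversarial one, and the comparison happens on latent features rather than raw pixel means, but the before/after-intervention contrast is the same.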

    Existence theorems for a crystal surface model involving the p-Laplace operator

    Full text link
    The manufacturing of crystal films lies at the heart of modern nanotechnology. How to accurately predict the motion of a crystal surface is of fundamental importance. Many continuum models have been developed for this purpose, including a number of PDE models, which are often obtained as the continuum limit of a family of kinetic Monte Carlo models of crystal surface relaxation that includes both the solid-on-solid and discrete Gaussian models. In this paper we offer an analytical perspective into some of these models. To be specific, we study the existence of a weak solution to the boundary value problem for the equation $-\Delta e^{-\mbox{div}\left(|\nabla u|^{p-2}\nabla u\right)}+au=f$, where $p>1$, $a>0$ are given numbers and $f$ is a given function. This problem is derived from a crystal surface model proposed by J.L.~Marzuola and J.~Weare (2013 Physical Review, E 88, 032403). The mathematical challenge is due to the fact that the principal term in our equation is an exponential function of a p-Laplacian. Existence of a suitably-defined weak solution is established under the assumptions that $p\in(1,2]$, $N\leq 4$, and $f\in W^{1,p}$. Our investigations reveal that the key to our existence assertion is how to control the set where $-\mbox{div}\left(|\nabla u|^{p-2}\nabla u\right)$ is $\pm\infty$

    A Study of Developing a System Dynamics Model for the Learning Effectiveness Evaluation

    Get PDF
    This study used the research method of system dynamics and applied the Vensim software to develop a learning effectiveness evaluation model. The study developed four cause-and-effect chains affecting learning effectiveness, involving teachers' teaching enthusiasm, family involvement, the school's implementation of scientific activities, and creative teaching methods, as well as a system dynamics model based on these four chains. Based on the developed system dynamics model, this study performed simulations to investigate the relationships among family involvement, learning effectiveness, teaching achievement, creative teaching methods, and students' learning interest. The results verified that there are positive correlations between family involvement and students' learning effectiveness, as well as between students' learning effectiveness and teachers' teaching achievements. The results also indicated that the use of creative teaching methods can increase students' learning interest and learning achievement.
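    A system dynamics model of this kind is, at bottom, a set of stocks updated by flows over time. The sketch below is a hypothetical one-chain toy with made-up coefficients, not the paper's Vensim model; it only illustrates how a reinforcing cause-and-effect chain is simulated numerically.

```python
def simulate(steps=20, dt=1.0, k1=0.05, k2=0.04):
    """Toy stock-and-flow simulation of one reinforcing chain
    (hypothetical coefficients): family involvement drives learning
    effectiveness, which drives teaching achievement."""
    involvement, effectiveness, achievement = 1.0, 0.5, 0.5
    history = []
    for _ in range(steps):
        effectiveness += dt * k1 * involvement    # inflow driven by involvement
        achievement += dt * k2 * effectiveness    # inflow driven by effectiveness
        history.append((effectiveness, achievement))
    return history

trace = simulate()
print(trace[-1])  # both levels rise under the positive coupling
```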

    The Case | A woman with bilateral flank pain

    Get PDF

    Hotspot Analysis of Spatial Environmental Pollutants Using Kernel Density Estimation and Geostatistical Techniques

    Get PDF
    Concentrations of four heavy metals (Cr, Cu, Ni, and Zn) were measured at 1,082 sampling sites in Changhua county of central Taiwan. A hazard zone is defined in the study as a place where the content of a heavy metal exceeds the corresponding control standard. This study examines the use of spatial analysis for identifying multiple soil pollution hotspots in the study area. In a preliminary investigation, kernel density estimation (KDE) was used for hotspot analysis of soil pollution from a set of observed occurrences of hazards. In addition, the study estimates the hazardous probability of each heavy metal using geostatistical techniques such as sequential indicator simulation (SIS) and indicator kriging (IK). Results show that there are multiple hotspots for these four heavy metals and that they are strongly correlated with the locations of industrial plants and irrigation systems in the study area. Moreover, the pollution hotspots detected using KDE are almost the same as those estimated using IK or SIS. Soil pollution hotspots and polluted sampling densities are clearly delineated using the KDE approach based on contaminated point data. Furthermore, the risk of hazards is explored with both KDE and the geostatistical approaches, and the hotspot areas are captured without requiring exhaustive sampling everywhere
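    The KDE hotspot step can be sketched concisely. The data below are synthetic stand-ins for exceedance locations (the real study uses 1,082 measured sites), and the threshold rule is an illustrative choice, not the one used in the paper: fit a kernel density over hazard occurrences, then flag grid cells whose density is unusually high.

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical sampling sites: a dense cluster of exceedances near (0, 0)
rng = np.random.default_rng(0)
cluster = rng.normal(0, 0.3, size=(2, 200))       # clustered exceedances
background = rng.uniform(-5, 5, size=(2, 100))    # scattered exceedances
sites = np.hstack([cluster, background])

kde = gaussian_kde(sites)    # kernel density over hazard occurrences

# evaluate the density on a grid; cells above a threshold form the hotspot
xs, ys = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
dens = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
is_hotspot = dens > dens.mean() + 2 * dens.std()
print(is_hotspot.sum(), "grid cells flagged as hotspot")
```

    The indicator-kriging and SIS steps would instead model the exceedance probability at unsampled locations, which is why the study can compare hotspot maps across the two families of methods.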